Bayesian Design of Tandem Networks for Distributed Detection With Multi-bit Sensor Decisions
We consider the problem of decentralized hypothesis testing under
communication constraints in a topology where several peripheral nodes are
arranged in tandem. Each node receives an observation and transmits a message
to its successor, and the last node then decides which hypothesis is true. We
assume that the observations at different nodes are, conditioned on the true
hypothesis, independent and the channel between any two successive nodes is
considered error-free but rate-constrained. We propose a cyclic numerical
algorithm for the design of the nodes using a person-by-person methodology,
with the minimum expected error probability as the design criterion, where the
number of communicated messages is not necessarily equal to the number of
hypotheses. The number of peripheral nodes in the proposed method is in
principle arbitrary and the information rate constraints are satisfied by
quantizing the input of each node. The performance of the proposed method for
different information rate constraints, in a binary hypothesis test, is
compared to the optimum rate-one solution due to Swaszek and a method proposed
by Cover, and it is shown numerically that increasing the channel rate can
significantly enhance the performance of the tandem network. Simulation results
for M-ary hypothesis tests similarly show that increasing the channel rates
significantly improves the performance of the tandem network.
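The person-by-person idea can be illustrated with a toy two-node tandem chain. The sketch below is an assumption-laden miniature, not the paper's algorithm: unit-variance Gaussian observations with means -1 and +1 under the two hypotheses, a single one-bit message, symmetric priors, and a grid search standing in for the numerical optimisation. Cyclically re-optimising one threshold at a time while the others are held fixed can never increase the error probability.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Toy model (an assumption, not the paper's setting): under H0 every
# observation is N(-1, 1), under H1 it is N(+1, 1); observations are
# conditionally independent given the hypothesis.
MU = {0: -1.0, 1: +1.0}
PRIOR = {0: 0.5, 1: 0.5}

def error_prob(t1, t20, t21):
    """Expected error probability of a 2-node tandem chain in which node 1
    sends the one-bit message m = [x1 > t1] and node 2 decides H1 when
    its own observation x2 exceeds the message-dependent threshold."""
    pe = 0.0
    for h in (0, 1):
        p_m1 = 1.0 - phi(t1 - MU[h])          # P(message = 1 | H_h)
        p_decide1 = ((1.0 - p_m1) * (1.0 - phi(t20 - MU[h]))
                     + p_m1 * (1.0 - phi(t21 - MU[h])))
        pe += PRIOR[h] * (p_decide1 if h == 0 else 1.0 - p_decide1)
    return pe

def person_by_person_design(iters=10):
    """Cyclic descent: re-optimise one threshold at a time over a grid
    while the other two are held fixed, so Pe never increases."""
    grid = [i / 50.0 for i in range(-250, 251)]   # [-5, 5] in steps of 0.02
    t1, t20, t21 = 0.0, 0.0, 0.0
    for _ in range(iters):
        t20 = min(grid, key=lambda t: error_prob(t1, t, t21))
        t21 = min(grid, key=lambda t: error_prob(t1, t20, t))
        t1 = min(grid, key=lambda t: error_prob(t, t20, t21))
    return (t1, t20, t21), error_prob(t1, t20, t21)

thresholds, pe = person_by_person_design()
# Baseline: t1 = -5 makes the message constant, so node 2 decides alone
# with the (here optimal) single-node threshold 0.
single_node_pe = error_prob(-5.0, 0.0, 0.0)
```

The designed chain beats the best single-node rule, and the two fusion-side thresholds split around zero: a message leaning towards H0 forces node 2 to demand stronger evidence before declaring H1.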
Rate Allocation for Decentralized Detection in Wireless Sensor Networks
We consider the problem of decentralized detection where peripheral nodes
make noisy observations of a phenomenon and send quantized information about
the phenomenon towards a fusion center over a sum-rate constrained multiple
access channel. The fusion center then makes a decision about the state of the
phenomenon based on the aggregate received data. Using the Chernoff information
as a performance metric, Chamberland and Veeravalli previously studied the
structure of optimal rate allocation strategies for this scenario under the
assumption of an unlimited number of sensors. Our key contribution is to extend
these results to the case where there is a constraint on the maximum number of
active sensors. In particular, we find sufficient conditions under which the
uniform rate allocation is an optimal strategy, and then numerically verify
that these conditions are satisfied for some relevant sensor design rules under
a Gaussian observation model.
Comment: Accepted at SPAWC 201
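The trade-off behind the rate-allocation question can be made concrete with Chernoff information, the metric used above. The numbers below are illustrative assumptions (observation means -0.5 and +0.5, unit variance, uniform quantizers), not the paper's model: a total budget of two bits is spent either on one fine sensor or split uniformly across two coarse sensors, and since Chernoff information adds over conditionally independent sensors the comparison reduces to a single inequality.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cell_probs(mean, thresholds):
    """Probabilities that N(mean, 1) lands in each quantizer cell."""
    edges = [-math.inf] + list(thresholds) + [math.inf]
    return [phi(b - mean) - phi(a - mean) for a, b in zip(edges, edges[1:])]

def chernoff(p, q, grid=99):
    """Chernoff information max_{0<s<1} -log sum_x p(x)^(1-s) q(x)^s
    for discrete distributions, via a grid search over s."""
    best = 0.0
    for k in range(1, grid + 1):
        s = k / (grid + 1)
        best = max(best, -math.log(sum(pi ** (1 - s) * qi ** s
                                       for pi, qi in zip(p, q))))
    return best

# One 2-bit sensor (thresholds -1, 0, 1) versus two 1-bit sensors
# (threshold 0 each); total rate is 2 bits in both allocations.
c_1bit = chernoff(cell_probs(-0.5, [0.0]), cell_probs(+0.5, [0.0]))
c_2bit = chernoff(cell_probs(-0.5, [-1.0, 0.0, 1.0]),
                  cell_probs(+0.5, [-1.0, 0.0, 1.0]))
```

Here the uniform split wins: two coarse sensors together accumulate more Chernoff information than one fine sensor, in line with the regime in which spreading the sum rate across sensors is the optimal strategy.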
Family Watchdog
We consider a distributed detection system under communication constraints, where several peripheral nodes observe a common phenomenon and send their observations to a fusion center via error-free but rate-constrained channels. Using the minimum expected error probability as a design criterion, we propose a cyclic procedure for the design of the peripheral nodes using the person-by-person methodology. It is shown that a fine-grained binning idea, together with a method for updating the conditional probabilities of the joint index space at the fusion center, decreases the complexity of the algorithm and makes it tractable. Also, unlike previous methods which use dissimilarity measures (e.g., the Bhattacharyya distance), a priori hypothesis probabilities are allowed to contribute to the design in the proposed method. The performance of the proposed method is compared to a method due to Longo et al., and it is shown that the new method can significantly outperform the previous one at a comparable complexity.
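The difference between a prior-blind dissimilarity criterion and the expected-error criterion can be seen in a small, fully enumerable example. Everything below is a hypothetical toy (a four-valued observation with made-up likelihoods and a single one-bit quantizer), not the paper's design procedure: with equal priors the Bhattacharyya-best and error-best quantizers coincide, but at a skewed prior of 0.8 they differ, and the prior-aware choice achieves a strictly lower error probability.

```python
import math
from itertools import combinations

# Hypothetical likelihoods of a 4-valued observation (illustrative only).
P0 = [0.4, 0.3, 0.2, 0.1]    # P(x | H0)
P1 = [0.05, 0.15, 0.3, 0.5]  # P(x | H1)

def partitions():
    """All non-trivial 1-bit quantizers, given as the set S mapped to bit 1."""
    for r in range(1, len(P0)):
        for s in combinations(range(len(P0)), r):
            yield frozenset(s)

def bhattacharyya_coeff(S):
    """Prior-blind criterion: smaller coefficient = larger distance."""
    a0 = sum(P0[i] for i in S)
    a1 = sum(P1[i] for i in S)
    return math.sqrt(a0 * a1) + math.sqrt((1 - a0) * (1 - a1))

def bayes_error(S, pi0):
    """Exact error probability of quantize-then-decide under prior pi0."""
    a0 = sum(P0[i] for i in S)
    a1 = sum(P1[i] for i in S)
    return (min(pi0 * a0, (1 - pi0) * a1)
            + min(pi0 * (1 - a0), (1 - pi0) * (1 - a1)))

best_bhatt = min(partitions(), key=bhattacharyya_coeff)
best_error = min(partitions(), key=lambda S: bayes_error(S, 0.8))
```

With these numbers the Bhattacharyya criterion picks the quantizer that separates {0, 1} from {2, 3} regardless of the priors, whereas at the skewed prior the error-optimal quantizer isolates the most H1-revealing symbol and attains error 0.18 instead of 0.20.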
Achieving a vanishing SNR-gap to exact lattice decoding at a subexponential complexity
The work identifies the first lattice decoding solution that achieves, in the
general outage-limited MIMO setting and in the high-rate and high-SNR limit,
both a vanishing gap to the error-performance of the (DMT optimal) exact
solution of preprocessed lattice decoding, and a computational
complexity that is subexponential in the number of codeword bits. The proposed
solution employs lattice reduction (LR)-aided regularized (lattice) sphere
decoding and proper timeout policies. These performance and complexity
guarantees hold for most MIMO scenarios, all reasonable fading statistics, all
channel dimensions and all full-rate lattice codes.
In sharp contrast to the above manageable complexity, the complexity of other
standard preprocessed lattice decoding solutions is shown here to be extremely
high. Specifically, the work is the first to quantify the complexity of these
lattice (sphere) decoding solutions and to prove the surprising result that the
complexity required to achieve a certain rate-reliability performance is
exponential in the lattice dimensionality and in the number of codeword bits,
and in fact matches, in common scenarios, the complexity of ML-based
solutions. Through this sharp contrast, the work rigorously quantifies, for the
first time, the pivotal role of lattice reduction as a complexity-reducing
ingredient.
Finally, the work analytically refines transceiver DMT analysis, which
generally fails to address potentially massive gaps between theory and
practice. Instead, the adopted vanishing-gap condition guarantees that the
decoder's error curve is arbitrarily close, given a sufficiently high SNR, to
the optimal error curve of exact solutions. This is a much stronger condition
than DMT optimality, which only guarantees an error gap that is subpolynomial
in SNR and can thus be unbounded and generally unacceptable in practical
settings.
Comment: 16 pages - submission for IEEE Trans. Inform. Theory
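Why lattice reduction is such an effective complexity-reducing ingredient can already be seen in two dimensions. The sketch below is a minimal illustration, not the paper's regularized sphere decoder with timeout policies: naive Babai rounding on a skewed basis misses the nearest lattice point, while the same rounding on a Lagrange/Gauss-reduced basis (the two-dimensional case of LLL) recovers it. The basis, the transmitted point, and the noise are arbitrary illustrative numbers.

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def norm2(v):
    return dot(v, v)

def gauss_reduce(b1, b2):
    """Lagrange/Gauss reduction, the 2-D special case of LLL: returns a
    basis of the same lattice with (near-)shortest vectors."""
    b1, b2 = list(b1), list(b2)
    if norm2(b1) > norm2(b2):
        b1, b2 = b2, b1
    while True:
        mu = round(dot(b1, b2) / norm2(b1))     # integer size-reduction step
        b2 = [b2[i] - mu * b1[i] for i in range(2)]
        if norm2(b2) >= norm2(b1):
            return b1, b2
        b1, b2 = b2, b1

def babai_round(b1, b2, y):
    """Naive lattice rounding: z = round(B^{-1} y), return the point B z."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    z1 = round((b2[1] * y[0] - b2[0] * y[1]) / det)
    z2 = round((-b1[1] * y[0] + b1[0] * y[1]) / det)
    return [z1 * b1[i] + z2 * b2[i] for i in range(2)]

# Skewed generator basis (columns b1, b2), a lattice point, and noise;
# all numbers are hypothetical.
b1, b2 = [1.0, 0.0], [1.5, 0.1]
point = [2 * b1[i] + 3 * b2[i] for i in range(2)]   # transmitted: (6.5, 0.3)
y = [point[0] + 0.2, point[1] - 0.03]               # received:    (6.7, 0.27)

naive = babai_round(b1, b2, y)        # rounding on the skewed basis: (7.5, 0.3)
r1, r2 = gauss_reduce(b1, b2)
lr_aided = babai_round(r1, r2, y)     # rounding on the reduced basis: (6.5, 0.3)
```

The reduced basis makes the rounding regions nearly rectangular, so the cheap linear step already returns the closest lattice point here; the skewed basis sends the same noise realisation to a point 0.8 away from the received vector.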
DMT Optimality of LR-Aided Linear Decoders for a General Class of Channels, Lattice Designs, and System Models
The work identifies the first general, explicit, and non-random MIMO
encoder-decoder structures that guarantee optimality with respect to the
diversity-multiplexing tradeoff (DMT), without employing a computationally
expensive maximum-likelihood (ML) receiver. Specifically, the work establishes
the DMT optimality of a class of regularized lattice decoders, and more
importantly the DMT optimality of their lattice-reduction (LR)-aided linear
counterparts. The results hold for all channel statistics, for all channel
dimensions and, most interestingly, irrespective of the particular lattice code
applied. As a special case, it is established that the LLL-based LR-aided
linear implementation of the MMSE-GDFE lattice decoder facilitates DMT optimal
decoding of any lattice code at a worst-case complexity that grows at most
linearly in the data rate. This represents a fundamental reduction in the
decoding complexity when compared to ML decoding whose complexity is generally
exponential in rate.
The generality of these results makes them applicable to a plethora of pertinent
communication scenarios such as quasi-static MIMO, MIMO-OFDM, ISI,
cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality
of the LR-aided linear decoder is guaranteed. The adopted approach yields
insight, and motivates further study, into joint transceiver designs with an
improved SNR gap to ML decoding.
Comment: 16 pages, 1 figure (3 subfigures), submitted to the IEEE Transactions
on Information Theory
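For reference, the benchmark these decoders are measured against is the classical Zheng-Tse diversity-multiplexing tradeoff for an nt x nr i.i.d. Rayleigh quasi-static channel: the piecewise-linear curve through the points (k, (nt - k)(nr - k)) for integer k. The helper below is background material, not a result of the paper:

```python
def dmt(nt, nr, r):
    """Optimal diversity gain d*(r) of an nt x nr i.i.d. Rayleigh
    quasi-static MIMO channel (Zheng-Tse): piecewise-linear interpolation
    between the corner points (k, (nt - k) * (nr - k))."""
    if not 0 <= r <= min(nt, nr):
        raise ValueError("multiplexing gain r must lie in [0, min(nt, nr)]")
    k = min(int(r), min(nt, nr) - 1)            # active segment [k, k+1]
    d_lo = (nt - k) * (nr - k)
    d_hi = (nt - k - 1) * (nr - k - 1)
    return d_lo + (r - k) * (d_hi - d_lo)       # linear interpolation

# Example: a 2x2 channel offers diversity 4 at r = 0 and diversity 1 at r = 1.
```

A decoder is DMT optimal when its outage/error exponent traces this exact curve; the abstract's point is that the LR-aided linear decoder does so at complexity growing at most linearly in the rate.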